Stack-Driven Masking for Grammar-Constrained Decoding (GLR + Weighted Automata)

This post is intentionally a “foot-in-the-door” writeup: enough detail to communicate the core idea (and timestamp it), not a full implementation guide. I’ll likely follow up with the compile-time tricks later.

Executive summary

Grammar-constrained generation has two very different problems hiding under one name:

- For commit, incremental parsing techniques (LR/GLR + a graph-structured stack) are a good fit.

- For get_mask, the usual “simulate candidate tokens through the parser” approach ties runtime to vocabulary size and tokenization ambiguity in ways that are brutal for p99.9 latency.

The idea I want to put on the record:

Invert the problem. Precompile the grammar+tokenizer into a bitset-weighted automaton whose input is the parser stack (or GSS). Then get_mask becomes a single pass that reads stack state IDs and does bitset ∩/∪ operations—no per-vocabulary-token simulation.


Setup

You have a grammar (often “JSON Schema”, though in practice this means the structural subset you can compile into something CFG-like), and you want the LLM to only emit valid outputs.

At each decoding step the model produces logits over the vocabulary V. You sample a token, append it to the output, and repeat.

Constrained decoding is: don’t sample tokens that would make the output invalid.

So we maintain a constraint state and, for each step, compute a mask M ⊆ V of allowed tokens.

No more broken JSON—at least syntactically.


Two primitive operations

It’s useful to separate constrained decoding into two methods:

- commit(state, token): fold the sampled token into the constraint state.
- get_mask(state): compute the set of tokens allowed next.

At runtime you want a tight loop like:

loop:
  parallel:
    GPU: compute logits
    CPU: compute mask
  apply mask to logits
  sample token
  commit token to:
    - model KV-cache
    - constraint state
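As a minimal sketch of the “apply mask / sample” steps (greedy selection standing in for real sampling; all names are illustrative, and the mask is an int bitset):

```python
import math

def pick_token(logits, mask_bits):
    """Apply a bitset mask to logits, then pick greedily.

    `mask_bits` is an int bitset: bit i set means token i is allowed.
    Toy stand-in for real sampling; names are hypothetical.
    """
    masked = [l if (mask_bits >> i) & 1 else -math.inf
              for i, l in enumerate(logits)]
    return max(range(len(masked)), key=lambda i: masked[i])

print(pick_token([0.1, 5.0, 0.3], 0b101))  # token 1 is masked out -> 2
```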

This split matters because:

- commit runs once per sampled token and can afford genuine parsing work.
- get_mask races the GPU on every decode step, so only its worst case matters.


Why worst-case latency matters (p99.9, not p50)

Inference servers batch many sequences on the GPU. The CPU-side mask has to arrive “in time” for each decode step, across the whole batch.

If the GPU step latency is ~1ms in your throughput regime, then you need your mask computation to reliably finish within that same window for every active sequence. A single slow mask can stall the whole batch, which cascades into ugly scheduling behavior.

So it’s not “can you do masking fast on average?” It’s “can you do it fast every single step?”

That’s the lens for everything below.


commit: incremental parsing is a good mental model

When you call commit(state, token), you are incrementally parsing a growing prefix. This is a well-studied problem.

Glossary (to keep levels straight)

There are three different “alphabets” floating around:

- Bytes: what an LLM token literally is once detokenized.
- Terminals: what the lexer emits and the grammar/parser consumes.
- LLM tokens: the vocabulary V the model samples from (~200k entries).

Also:

- GSS: graph-structured stack, the GLR structure that shares many viable parse stacks in one graph.
- Stack state IDs: the LR automaton states that get pushed onto the stack; these are what the masking automaton reads.

Why GLR/GSS shows up

If your constraints have nondeterminism (common once you compile real schema-ish structures, alternatives, optionality, “oneOf”-style branching, etc.), you often end up with “multiple parses remain viable so far.”

The GLR family’s core trick is: don’t backtrack, represent the ambiguity compactly. You keep a set of stack “heads” in a shared graph.

This tends to work well for commit because:

- Per-terminal work is bounded by the number of live stack heads, not by how many parses could exist in principle.
- Ambiguity is shared in the graph instead of duplicated by backtracking.
- Branches that stop being viable simply drop out of the head set.

So far so good.


get_mask: this is where things get ugly

Now you have a parse state (often a GSS), and you want a mask over LLM tokens.

A straightforward approach is:

For each LLM token t, check whether appending it keeps the parse valid.

But you don’t actually append “an LLM token” to the parser. You append:

- the token’s bytes,
- which the lexer segments into zero or more terminals (possibly in several ways),
- which the parser then consumes.
Doing that per vocabulary token is a non-starter.

The tokenization ambiguity trap (why the Fibonacci story matters)

Here’s the precise point I want to make with the “112 dashes” anecdote:

Even if you don’t like the specific tokenizer example, incremental lexing can be ambiguous, and ambiguity can explode.

In a toy lexer where the only terminals are - and --, a run of N dashes has F_{N+1} segmentations (Fibonacci growth). For N = 112, that’s on the order of 10^23 ways to segment.
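This is easy to check numerically. A small sketch counting segmentations of a dash run into the terminals - and --:

```python
def count_segmentations(n: int) -> int:
    """Ways to segment a run of n dashes into the terminals "-" and "--".

    A run of n dashes ends in either "-" (leaving n-1) or "--" (leaving n-2),
    so ways(n) = ways(n-1) + ways(n-2): Fibonacci growth.
    """
    a, b = 1, 1  # ways(0), ways(1)
    for _ in range(n):
        a, b = b, a + b
    return a

print(count_segmentations(10))            # 89
print(count_segmentations(112) > 10**23)  # True: roughly 1.8e23 segmentations
```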

Why does this matter to masking?

Because the naive masking algorithm effectively asks:

“Is there any way to segment this LLM token’s bytes into terminals such that the parser can consume them from the current state?”

If you handle “token → terminal sequence(s)” by exploring a tokenization trellis, then in the worst case you’re exploring an exponential number of segmentations per candidate LLM token.

Most practical systems avoid the full explosion with tries/trellises and memoization. That helps. But it doesn’t change the fundamental shape of the problem: you’re still trying to answer a question about all 200k tokens by simulating “what if we took this token?”
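For intuition on why memoization tames the count but not the shape of the problem, here is a memoized checker for the toy dash lexer (illustrative only):

```python
from functools import lru_cache

TERMINALS = ("-", "--")  # the toy lexer from the Fibonacci example

@lru_cache(maxsize=None)
def segmentable(s: str) -> bool:
    """Can `s` be split into terminals? Memoized on the remaining suffix.

    Without the cache this recursion explores every segmentation
    (Fibonacci-many for a dash run); with it, each suffix is solved once.
    But you would still be asking this question per candidate LLM token.
    """
    if not s:
        return True
    return any(s.startswith(t) and segmentable(s[len(t):])
               for t in TERMINALS)

print(segmentable("-" * 112))  # True, without exploring ~1e23 paths
```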

Why tries/trellises still hurt in p99.9 land

A common direction is:

- Put the vocabulary’s byte strings into a trie (or lay out a tokenization trellis).
- Walk it against the current lexer/parser state, memoizing shared prefixes.
- Set the mask bit for every token whose walk stays valid.
This can work well for many grammars and many workloads. My claim is narrower:

For general CFG-ish constraints + large vocabularies + strict worst-case deadlines, this approach has pathological cases that are hard to engineer away.

The big practical pain points are:

- Worst-case work still grows with the vocabulary/trie, even when averages look fine.
- Caching and memoization improve the mean, not the tail: the cold, unusual state pays full price at exactly the wrong moment.
- Tokenization ambiguity makes per-path cost itself unpredictable.
That’s what I mean by “dead end” in the original draft: not “nobody can make it fast,” but “I couldn’t get a per-token / per-trie-path simulation approach to have predictable p99.9 behavior at scale without the complexity ballooning.”


Invert the problem: read the stack, don’t simulate the vocabulary

The key reframing is:

Token validity depends on the parse stack(s).
So instead of asking “does token tt work on this stack?”, ask “given this stack, which tokens work?”

That sounds tautological, but it changes the engineering shape:

- You make one pass over the parse state instead of ~200k “what if” simulations.
- Cost scales with stack depth and automaton size, not with |V|.

Now we need a data structure that turns “stack → allowed tokens” into something we can execute fast.

That’s where the weighted automaton comes in.


The answer (the idea): bitset-weighted automata over parser stacks

What I mean by “weighted automaton” here

Think of a finite automaton, except each transition carries a weight and traversing paths combines weights.

In my setting:

- Weights are bitsets over the LLM vocabulary (one bit per token).
- Combining weights along a path is intersection: each edge filters which tokens survive.
- Combining weights across alternative paths is union: a token stays allowed if any path keeps it.

This is exactly the “semiring” you’d expect on sets: union as “addition,” intersection as “multiplication,” the empty set as zero, and the full vocabulary as one.

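Concretely, with Python ints as bitsets over a toy 8-token vocabulary (all values illustrative):

```python
# The set semiring on bitsets: union is "addition", intersection is
# "multiplication", EMPTY is zero, ALL_TOKENS is one.
ALL_TOKENS = (1 << 8) - 1   # toy 8-token vocabulary, every token possible
EMPTY = 0                   # no token possible

# Along one path, each edge weight filters the survivors: intersection.
path_a = ALL_TOKENS & 0b0011_1111 & 0b0011_0101   # -> 0b0011_0101

# A second, alternative path keeps a different set alive.
path_b = ALL_TOKENS & 0b1000_0001

# Across alternative paths, survivors merge: union.
mask = path_a | path_b
print(bin(mask))  # 0b10110101
```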
The missing link (made explicit): the stack is the input string

The automaton’s input alphabet is the set of LR parser state IDs.

The automaton reads (a representation of) your current parse configuration:

- for a single LR stack, the sequence of stack state IDs from the top down;
- for a GSS, the same reading propagated over the graph of state IDs.

So the automaton is not replacing the parser. It’s a precompiled masking machine:

Input: the current stack / GSS (as stack state IDs)
Output: a bitset mask of the LLM tokens that are valid next

Intuition: each token corresponds to “a regular language of stacks”

Another way to see it (which helped me):

For each LLM token t, there is some set of parser stacks from which t is a valid next step. That set of stacks is something you can represent with a finite automaton over stack state IDs (a “stack language” of sorts).

If you did that for every tVt \in V, you’d have 200k automata. That’s useless at runtime.

A weighted automaton is how you smash those 200k membership tests into one run:

Runtime sketch

For a single LR stack, the runtime looks like:

def get_mask(stack_state_ids_top_to_bottom):
    # A is the precompiled weighted automaton; A.step(state, sid) yields
    # (next_state, weight) pairs for reading stack state ID `sid`.
    # frontier: map automaton_state -> bitset(tokens still viable)
    frontier = { A.start: ALL_TOKENS }

    for sid in stack_state_ids_top_to_bottom:
        new = {}
        for a_state, tokens in frontier.items():
            for (a2, weight) in A.step(a_state, sid):
                tokens2 = tokens & weight            # intersection (filter)
                if tokens2.any():
                    new[a2] = new.get(a2, EMPTY) | tokens2  # union (merge paths)
        frontier = new

        if decided(frontier):  # early exit: deeper stack can't change the mask
            break

    return combine_accepting(frontier)  # union the bitsets at accepting states
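To make the sketch concrete, here is a self-contained toy run (hypothetical automaton, 4-token vocabulary, int bitsets instead of a bitset class, and a plain accepting-state set in place of the decided/combine_accepting hooks):

```python
ALL_TOKENS = 0b1111  # toy 4-token vocabulary, t0..t3
EMPTY = 0

# Precompiled weighted automaton as a transition table:
# (automaton_state, stack_state_id) -> [(next_state, bitset_weight), ...]
TRANSITIONS = {
    ("start", 7): [("mid", 0b0111)],                    # reading state 7 kills t3
    ("mid", 2):   [("acc", 0b0110), ("acc2", 0b0001)],  # two paths diverge
}
ACCEPTING = {"acc", "acc2"}

def get_mask(stack_state_ids_top_to_bottom):
    frontier = {"start": ALL_TOKENS}
    for sid in stack_state_ids_top_to_bottom:
        new = {}
        for a_state, tokens in frontier.items():
            for a2, weight in TRANSITIONS.get((a_state, sid), []):
                t2 = tokens & weight                   # intersection (filter)
                if t2:
                    new[a2] = new.get(a2, EMPTY) | t2  # union (merge paths)
        frontier = new
    mask = EMPTY
    for a_state, tokens in frontier.items():
        if a_state in ACCEPTING:
            mask |= tokens
    return mask

print(bin(get_mask([7, 2])))  # 0b111: tokens t0..t2 valid, t3 not
```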

For a GSS, you do the same computation but over a graph rather than a single list:

(Details omitted here, but the point is: it’s the same ∩/∪ propagation pattern.)

Why this is the right kind of work for p99.9

Mask computation becomes:

- one pass over the stack/GSS,
- a few table lookups per stack entry,
- and wide bitset AND/OR operations: cheap, branch-light, cache-friendly.
Critically:

The runtime is driven by what the parse state actually contains.

And you can early-exit when further stack depth cannot change the result (the decided(frontier) hook above), which is common if the automaton reaches a fixed point.


How this fits with GLR (commit + mask together)

Putting it together:

- commit(state, token): ordinary GLR shifts/reduces that update the GSS, one terminal at a time.
- get_mask(state): one run of the precompiled weighted automaton over the GSS.

So you get a split where:

- commit cost tracks actual parsing work, which is well understood and incremental;
- get_mask cost tracks stack/GSS structure, and never scales with |V|.


Why I think this is close to the right shape (without overselling “optimal”)

I’m avoiding “this is optimal” as a theorem, but the shape matches what I want in production:

- mask cost is decoupled from vocabulary size;
- no tokenization trellis is explored at runtime: that work is paid once at compile time;
- the remaining variance comes from the output’s own structure, which you can reason about.
You still have worst cases (deep stacks exist), but it’s a worst case that comes from the actual output structure (nesting depth), not from adversarial vocabulary/tokenization effects.


Tradeoffs / scope notes (what this post is not claiming)

- The hard work moves to compile time: building, determinizing, and minimizing these automata without exponential blowups is its own problem (the planned follow-up).
- Weight storage needs care: a naive bitset per edge is ~25KB at a 200k vocabulary.
- Deep stacks still cost something; the claim is only that the worst case now tracks output nesting depth rather than tokenization pathologies.

Where this sits relative to existing constrained decoding libraries

Most constrained decoding systems I’ve seen in the LLM ecosystem do some variant of:

- compile the constraint into a regex/finite automaton (or a pushdown variant), then
- per step, test candidate tokens against it, typically via a vocabulary trie plus caching.
That can be great for many use cases (especially regular/near-regular constraints like JSON).

The approach I’m describing is different in the “mask” half:

- the parse stack is the automaton’s input; the tokens live in the edge weights;
- one pass over the stack answers validity for every token simultaneously.
I’m not claiming other approaches are “wrong.” I’m saying: if you care about strict worst-case deadlines in a batched inference setting, you want a get_mask whose cost is not dominated by |V| or tokenization pathologies.


Closing

Constrained generation is often sold as “just mask invalid tokens.” That hides the real split:

- commit: incremental parsing, well served by LR/GLR and a GSS;
- get_mask: a vocabulary-wide validity question with a hard per-step deadline.
My contribution/claim here is the reframing and the data structure:

Precompile the grammar+tokenizer into a bitset-weighted automaton over parser stack state IDs, and compute masks by reading the stack/GSS once with ∩/∪ bitset propagation.

That’s the idea I wanted to put on the record. If there’s interest, the next post would be about the “slow compile” side: how to build/determinize/minimize these automata without exponential blowups, and how to represent the weights without storing a 25KB bitset per edge.